Computing resource allocation in three-tier IoT fog networks: a joint optimization approach combining Stackelberg game and matching
Fog computing is a promising architecture for providing economical, low-latency data services to future Internet of Things (IoT)-based network systems. It relies on a set of low-power fog nodes (FNs), located close to the end users, to offload services originally targeted at cloud data centers. In this paper, we consider a fog computing network consisting of a set of data service operators (DSOs), each of which controls a set of FNs to provide the required data service to a set of data service subscribers (DSSs). How to allocate the limited computing resources of the FNs among all the DSSs to achieve optimal and stable performance is an important problem. We therefore propose a joint optimization framework for all FNs, DSOs, and DSSs to achieve the optimal resource allocation schemes in a distributed fashion. In the framework, we first formulate a Stackelberg game to analyze the pricing problem for the DSOs as well as the resource allocation problem for the DSSs. Assuming the DSOs know the expected amount of resource purchased by the DSSs, a many-to-many matching game is applied to investigate the pairing problem between DSOs and FNs. Finally, within the same DSO, we apply another layer of many-to-many matching between each of the paired FNs and the serving DSSs to solve the FN-DSS pairing problem. Simulation results show that our proposed framework can significantly improve the performance of IoT-based network systems.
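The leader-follower structure of a Stackelberg pricing game can be sketched in a few lines. This is a toy instance, not the paper's model: the follower's logarithmic utility, the leader's unit cost, and all numbers are invented for illustration; the point is only that the leader optimizes its price while anticipating the follower's best response.

```python
def follower_demand(price, a=10.0):
    """Follower (DSS) best response: maximize a*ln(1+q) - price*q  =>  q* = a/price - 1."""
    return max(a / price - 1.0, 0.0)

def leader_profit(price, cost=1.0, a=10.0):
    """Leader (DSO) profit, anticipating the follower's demand at this price."""
    return (price - cost) * follower_demand(price, a)

def stackelberg_price(cost=1.0, a=10.0):
    # The leader grid-searches its price against the follower's best response.
    grid = [cost + 0.01 * k for k in range(1, 1000)]
    return max(grid, key=lambda p: leader_profit(p, cost, a))

p_star = stackelberg_price()          # analytically, p* = sqrt(a * cost)
q_star = follower_demand(p_star)
```

Here the closed-form optimum is p* = sqrt(a·cost) ≈ 3.16, which the grid search recovers.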
Transfer-Learning-Based Approach for the Diagnosis of Lung Diseases from Chest X-ray Images
Using chest X-ray images is one of the least expensive and easiest ways to diagnose patients who suffer from lung diseases such as pneumonia and bronchitis. Inspired by existing work, a deep learning model is proposed to classify chest X-ray images into 14 lung-related pathological conditions. However, small datasets are not sufficient to train such a model. Two methods were used to tackle this: (1) transfer learning based on two pretrained neural networks, DenseNet and ResNet, was employed; (2) the data were preprocessed before being fed to the neural network, including checking for data leakage, handling class imbalance, and performing data augmentation. The proposed model was evaluated using classification accuracy and receiver operating characteristic (ROC) curves, and visualized with class activation maps. DenseNet121 and ResNet50 were used in the simulations, and the results showed that the model trained with DenseNet121 achieved better accuracy than the one trained with ResNet50.
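One of the preprocessing steps mentioned, handling class imbalance, is often done by weighting the loss per class inversely to class frequency. A minimal sketch of that computation (the class names are invented; the abstract does not specify which rebalancing method was used):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights, w_c = N / (K * n_c): rare classes
    get larger weights so the classifier is not dominated by frequent ones."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# Hypothetical imbalanced label set: 90 of one condition, 10 of another.
weights = balanced_class_weights(["pneumonia"] * 90 + ["bronchitis"] * 10)
```

The resulting weights can be passed to most deep learning frameworks as per-class loss weights.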
Realistic Peer-to-Peer Energy Trading Model for Microgrids Using Deep Reinforcement Learning
In this paper, we integrate deep reinforcement learning with our realistic peer-to-peer (P2P) energy trading model to address a decision-making problem for microgrids (MGs) in the local energy market. First, an hour-ahead P2P energy trading model with a set of critical physical constraints is formulated. Then, the decision-making process of energy trading is modelled as a Markov decision process, which is used to find the optimal strategies for the MGs with a deep reinforcement learning (DRL) algorithm. Specifically, a modified deep Q-network (DQN) algorithm helps the MGs utilise their resources and devise better strategies. Finally, we choose several real-world electricity data sets to perform the simulations. The DQN-based energy trading strategies improve the utilities of the MGs and, with a virtual penalty function, significantly reduce the scheduled power plant generation. Moreover, the model can determine the best battery for the selected MG. The results show that this P2P energy trading model can be applied to real-world situations.
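The paper's modified DQN uses a neural network, but the underlying Markov decision process and Bellman update can be illustrated with plain tabular Q-learning. Everything below is a toy: the two price states, the three actions, the rewards, and the exogenous transitions are all invented, and a real DQN would replace the table with a network.

```python
import random

random.seed(0)

# Toy MDP: state = price level ("low"/"high"); actions available to the MG.
ACTIONS = ["buy", "sell", "idle"]

def reward(state, action):
    # Illustrative payoff: buy when the price is low, sell when it is high.
    if state == "low" and action == "buy":
        return 1.0
    if state == "high" and action == "sell":
        return 1.0
    return 0.0

Q = {(s, a): 0.0 for s in ("low", "high") for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

state = "low"
for _ in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(state, x)])
    r = reward(state, a)
    nxt = random.choice(["low", "high"])            # exogenous price transition
    best_next = max(Q[(nxt, b)] for b in ACTIONS)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])  # Bellman update
    state = nxt

greedy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in ("low", "high")}
```

After training, the greedy policy recovers the intended "buy low, sell high" behaviour.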
Joint Resource Allocation and Power Control in Heterogeneous Cellular Networks for Smart Grids
Smart grid communication plays a pivotal role in coordinating energy generation, transmission, and distribution. Cellular technology with long-term evolution (LTE)-based standards has been a preferred choice for smart grid communication networks. However, conventional cellular networks can suffer from radio access network (RAN) congestion when many smart grid devices attempt access simultaneously. Heterogeneous cellular networks (HetNets) have been proposed as an important technique to solve this problem, because they can alleviate RAN congestion by off-loading access attempts from a macrocell to small cells. In the smart grid, real-time data from phasor measurement units (PMUs) have a stringent delay requirement in order to ensure the stability of the grid. In this paper, we propose a joint resource allocation and power control scheme to improve the end-to-end delay in HetNets by taking into account the simultaneous transmission of PMUs. We formulate the optimization problem as a mixed-integer problem and adopt a game-theoretic approach with a best-response dynamics algorithm to solve it. Simulation results show that the proposed scheme significantly reduces the end-to-end delay compared with first-in first-out and round-robin scheduling schemes.
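Best-response dynamics, the solution method named above, means each player repeatedly plays its optimal action against the others' current actions until the process settles at a Nash equilibrium. A minimal two-player sketch with an invented quadratic utility (not the paper's mixed-integer formulation):

```python
def best_response(x_other):
    """Maximizer of the toy utility u(x) = x - x**2 - 0.5*x*x_other:
    setting du/dx = 1 - 2x - 0.5*x_other = 0 gives x = (1 - 0.5*x_other) / 2."""
    return (1 - 0.5 * x_other) / 2

# Iterate best responses; for this game the map is a contraction, so it converges.
x1, x2 = 0.0, 0.0
for _ in range(50):
    x1 = best_response(x2)
    x2 = best_response(x1)
```

The iteration converges to the symmetric equilibrium x1 = x2 = 0.4, the fixed point of the best-response map.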
Deep Reinforcement Learning-Assisted Federated Learning for Robust Short-term Utility Demand Forecasting in Electricity Wholesale Markets
Short-term load forecasting (STLF) plays a significant role in the operation of electricity trading markets. Given the growing concern over data privacy, federated learning (FL) has increasingly been adopted in recent research to train STLF models for utility companies (UCs). Encouragingly, in wholesale markets, since it is not realistic for power plants (PPs) to access UCs' data directly, FL is a feasible way to obtain an accurate STLF model for PPs. However, due to FL's distributed nature and the intense competition among UCs, defects increasingly occur and degrade the STLF model, indicating that simply adopting FL is not enough. In this paper, we propose a DRL-assisted FL approach, DEfect-AwaRe federated soft actor-critic (DearFSAC), to robustly train an accurate STLF model for PPs to forecast short-term utility electricity demand. First, we design an STLF model based on long short-term memory (LSTM) using only historical load data and time data. Furthermore, considering the uncertainty of defect occurrence, a deep reinforcement learning (DRL) algorithm is adopted to assist FL by alleviating the model degradation caused by defects. In addition, for faster convergence of FL training, an auto-encoder is designed for both dimension reduction and quality evaluation of the uploaded models. In the simulations, we validate our approach on real 2019 data from Helsinki's UCs. The results show that DearFSAC outperforms all the other approaches whether or not defects occur.
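DearFSAC replaces FL's plain aggregation with DRL-derived weights; the baseline it builds on is federated averaging, where the server combines client models weighted by local dataset size. A minimal sketch of that aggregation step (the parameter vectors and sizes are made up):

```python
def fed_avg(client_params, client_sizes):
    """Federated averaging: the server aggregates client parameter vectors,
    each weighted by the size of that client's local dataset."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical clients: 1 sample vs 3 samples, so the second dominates.
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

A defect-aware scheme would instead learn these weights, down-weighting clients whose uploads look defective.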
Distributed resource allocation for data center networks: a hierarchical game approach
The increasing demand for data computing and storage for cloud-based services motivates the development and deployment of large-scale data centers. This paper studies the resource allocation problem in a data center networking system where multiple data center operators (DCOs) simultaneously serve multiple service subscribers (SSs). We formulate a hierarchical game to analyze this system, in which the DCOs and the SSs are regarded as leaders and followers, respectively. In the proposed game, each SS selects the serving DCO with its preferred price and purchases the optimal amount of resources for its computing requirements. Based on the responses of the SSs and the other DCOs, each DCO sets its resource price so as to maximize its profit. When coordination among DCOs is weak, we treat all DCOs as noncooperative and propose a sub-gradient algorithm for the DCOs to approach a sub-optimal solution of the game. When all DCOs are sufficiently coordinated, we formulate a coalition game among them and apply Kalai-Smorodinsky bargaining as a resource division approach to achieve high utilities. Both solutions constitute a Stackelberg equilibrium. The simulation results verify the performance improvement provided by our proposed approaches.
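Kalai-Smorodinsky bargaining, used above for resource division, picks the frontier point at which both players make equal relative concessions from their ideal payoffs. A two-player sketch on a linear utility frontier (the frontier shape and numbers are invented, not the paper's coalition model):

```python
def kalai_smorodinsky(capacity, d1, d2):
    """KS solution on the frontier u1 + u2 = capacity with disagreement
    point (d1, d2): move from the disagreement point toward the ideal
    point (i1, i2) until the frontier is reached."""
    i1, i2 = capacity - d2, capacity - d1        # each player's ideal payoff
    # Solve d + t*(i - d) lying on u1 + u2 = capacity for t.
    t = (capacity - d1 - d2) / ((i1 - d1) + (i2 - d2))
    return d1 + t * (i1 - d1), d2 + t * (i2 - d2)

u1, u2 = kalai_smorodinsky(10.0, 2.0, 4.0)
```

On this linear frontier the surplus above the disagreement point is split evenly, so the allocation is (4, 6).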
Customer Baseline Load Estimation for Incentive-Based Demand Response Using Long Short-Term Memory Recurrent Neural Network
The transition to an intelligent, reliable, and efficient smart grid with a high penetration of renewable energy drives the need to maximise the utilization of customers' demand response potential. The availability of smart meter data means this potential can be estimated more accurately, and suitable demand response (DR) programs can be targeted to customers for load shifting, clipping, and reduction. In this paper, we focus on estimating the customer demand baseline for incentive-based DR. We propose a long short-term memory recurrent neural network framework for customer baseline estimation that uses data from previous like days during the DR event period. We test the proposed methodology on the publicly available Irish smart meter data, and the results show a significant increase in baseline estimation accuracy compared to traditional baseline estimation methods.
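The traditional baselines the LSTM is compared against are typically simple averages over previous "like days". A sketch of that averaging baseline (the load profiles are made up; the abstract does not say exactly which traditional methods were benchmarked):

```python
def baseline_like_days(like_day_loads):
    """Traditional-style baseline: for each interval of the DR event window,
    average the load of the same interval over the selected like days."""
    hours = len(like_day_loads[0])
    days = len(like_day_loads)
    return [sum(day[h] for day in like_day_loads) / days for h in range(hours)]

# Two hypothetical like days, three intervals each (kW).
baseline = baseline_like_days([[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
```

The DR incentive is then settled against the gap between this baseline and the metered load during the event.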
Co-Optimizing Battery Storage for Energy Arbitrage and Frequency Regulation in Real-Time Markets Using Deep Reinforcement Learning
Battery energy storage systems (BESSs) play a critical role in mitigating the uncertainties associated with renewable energy generation, maintaining stability, and improving the flexibility of power networks. In this paper, a BESS is used to provide energy arbitrage (EA) and frequency regulation (FR) services simultaneously so as to maximize its total revenue within its physical constraints. The EA and FR actions are taken at different timescales, so the multitimescale problem is formulated as two nested Markov decision process (MDP) submodels. It is a complex decision-making problem with enormous amounts of high-dimensional data and uncertainty (e.g., the electricity price). Therefore, a novel co-optimization scheme is proposed to handle the multitimescale problem and coordinate the EA and FR services. A triplet deep deterministic policy gradient with exploration noise decay (TDD-ND) approach is used to obtain the optimal policy at each timescale. Simulations are conducted with real-time electricity prices and regulation signal data from the American PJM regulation market. The results show that the proposed approach outperforms the other policies studied in the literature.
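The energy-arbitrage half of the problem is easy to illustrate with a fixed threshold policy, one of the simple baselines a DRL agent would be expected to beat. All parameters below (capacity, power limit, thresholds, prices) are invented, and losses and FR coupling are ignored:

```python
def arbitrage_revenue(prices, capacity=4.0, power=1.0, low=20.0, high=40.0):
    """Threshold arbitrage policy: charge when the price is below `low`,
    discharge when it is above `high`, within energy and power limits
    (lossless toy model)."""
    soc, revenue = 0.0, 0.0                    # state of charge (MWh), revenue ($)
    for p in prices:
        if p < low and soc < capacity:
            e = min(power, capacity - soc)
            soc += e
            revenue -= e * p                   # pay to charge
        elif p > high and soc > 0.0:
            e = min(power, soc)
            soc -= e
            revenue += e * p                   # earn by discharging
    return revenue

rev = arbitrage_revenue([10.0, 10.0, 50.0, 50.0])
```

Here the battery charges 2 MWh at $10 and discharges it at $50, for a revenue of $80; a learned policy must instead anticipate prices it cannot see in advance.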
Wireless ad-hoc control networks
A new concept called wireless ad-hoc control networks (WACNets), designed for distributed and remote monitoring and control, is suggested in this work. Such systems represent the next stage in the evolution of distributed control and monitoring. WACNets explore a framework for an organic, evolutionary, and scalable method of integrating a large number of nodes with sensing and/or actuation, local intelligence and control, and data processing and communication capabilities. The concept is introduced and the design of the network is presented. As an essential element in the operation of WACNets, the service discovery mechanism developed for them is described and the progress made so far is reported.
Wireless ad-hoc control networks
Control systems have gone through major changes over the last decades, evolving from fully analogue systems to distributed digital controllers. The initial architecture of computer control systems was centralized, with all the sensors and actuators connected to a high-performance computer. As the number of sensors and actuators deployed in control systems increased significantly, this approach revealed many weaknesses, such as high cost, poor reliability, poor maintainability, and limited extensibility. With the emergence of low-cost microprocessors, computer-controlled systems adopted a more distributed architecture. Direct Digital Control (DDC) was the first distributed control system formally recognised in the literature and industry. Currently, control networks represent the latest development of control system architecture in industry: they combine localized intelligent digital controllers networked to a supervisory computer for data exchange and synchronization. A new concept called wireless ad-hoc control networks (WACNets), the next stage in the evolution of control system architecture, is studied in this thesis. Such systems consist of a large number of nodes with sensing and/or actuation, local intelligence and control, and data processing and communication components. The size, number, density, capabilities, and location-dependency of the nodes are determined by the specific application for which they are employed. The protocols and algorithms that run on the nodes can also provide self-organizing and cooperative capabilities for random deployment. In addition to the development of a conceptual model for WACNets, a test-bed for validating the concept, based on an IEEE 1451-compliant smart sensor and the Bluetooth standard, is designed and developed. To validate the test-bed and the concept, a monitor is also developed, which provides interaction and communication with the nodes in the network from a host computer.
The other contributions of the thesis include the design and development of a service discovery protocol based on the Bluetooth standard, a suite of software driving the test-bed, and validation of the test-bed. The results of the validation presented in the thesis are quite encouraging and strongly indicate that the approach is feasible.
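Service discovery, the protocol contribution named above, amounts to nodes advertising their capabilities and others looking them up. A minimal registry sketch of that idea (hypothetical code; the thesis builds its protocol on the Bluetooth service discovery standard, not on anything like this):

```python
class ServiceRegistry:
    """Toy service-discovery registry: nodes register the services they
    offer, and any node can look up which peers provide a given service."""

    def __init__(self):
        self._services = {}                     # service name -> set of node ids

    def register(self, node_id, service):
        self._services.setdefault(service, set()).add(node_id)

    def discover(self, service):
        return sorted(self._services.get(service, set()))

reg = ServiceRegistry()
reg.register("node-1", "temperature")
reg.register("node-2", "temperature")
reg.register("node-2", "actuation")
found = reg.discover("temperature")
```

In a real WACNet the registry would be distributed across the nodes rather than held in one place.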